WiDEVIEW: An UltraWideBand and Vision Dataset for Deciphering Pedestrian-Vehicle Interactions
Robust and accurate tracking and localization of road users like pedestrians
and cyclists is crucial to ensure safe and effective navigation of Autonomous
Vehicles (AVs), particularly so in urban driving scenarios with complex
vehicle-pedestrian interactions. Existing datasets useful for investigating
vehicle-pedestrian interactions are mostly image-centric and thus
vulnerable to vision failures. In this paper, we investigate Ultra-wideband
(UWB) as an additional modality for road users' localization to enable a better
understanding of vehicle-pedestrian interactions. We present WiDEVIEW, the
first multimodal dataset that integrates LiDAR, three RGB cameras, GPS/IMU, and
UWB sensors for capturing vehicle-pedestrian interactions in an urban
autonomous driving scenario. Ground truth image annotations are provided in the
form of 2D bounding boxes and the dataset is evaluated on standard 2D object
detection and tracking algorithms. The feasibility of UWB is evaluated for
typical traffic scenarios in both line-of-sight and non-line-of-sight
conditions using LiDAR as ground truth. We establish that UWB range data has
accuracy comparable to LiDAR, with an error of 0.19 meters, and provides
reliable anchor-tag range data up to 40 meters in line-of-sight conditions.
UWB performance in non-line-of-sight conditions depends on the nature of the
obstruction (trees vs. buildings). Further, we provide a qualitative analysis
of UWB performance for scenarios susceptible to intermittent vision failures.
The dataset can be downloaded from https://github.com/unmannedlab/UWB_Dataset.
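The UWB feasibility evaluation above compares UWB-reported anchor-tag ranges against distances derived from LiDAR ground truth. A minimal sketch of that comparison, assuming anchor and tag positions are already extracted from the LiDAR point cloud (the function name and the example reading are hypothetical, not from the dataset):

```python
import math

def uwb_range_error(uwb_range_m, anchor_xyz, tag_xyz):
    """Absolute difference between the UWB-reported range and the
    Euclidean anchor-tag distance derived from LiDAR ground truth."""
    lidar_range_m = math.dist(anchor_xyz, tag_xyz)
    return abs(uwb_range_m - lidar_range_m)

# Hypothetical reading: anchor at the origin, tag 10 m ahead,
# UWB reports 10.19 m -> error on the scale reported in the paper.
err = uwb_range_error(10.19, (0.0, 0.0, 0.0), (10.0, 0.0, 0.0))
print(f"{err:.2f} m")  # 0.19 m
```

In line-of-sight conditions this per-reading error stays small; under obstructions the UWB range inflates with multipath, which is what the non-line-of-sight analysis characterizes.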
Autonomous Quadcopter Landing on a Moving Target
Autonomous landing on a moving target is challenging because of external disturbances and localization errors. In this paper, we present a vision-based guidance technique with a log polynomial closing velocity controller to achieve faster and more accurate landing compared to traditional vertical landing approaches. The vision system uses a combination of color segmentation and AprilTags to detect the landing pad. No prior information about the landing target is needed. The guidance is based on the pure pursuit guidance law. The convergence of the closing velocity controller is shown, and we test the efficacy of the proposed approach through simulations and field experiments. The landing target during the field experiments was manually dragged at a maximum speed of 0.6 m/s. In the simulations, the maximum target speed of the ground vehicle was 3 m/s. We conducted a total of 27 field experiment runs for landing on a moving target and achieved a successful landing in 22 cases. The maximum error magnitude for a successful landing was recorded to be 35 cm from the landing target center. For the failure cases, the maximum distance of the vehicle's landing position from the target boundary was 60 cm.
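The guidance described above steers the quadcopter along the line of sight to the detected landing pad. A minimal 2-D sketch of that pure pursuit idea, assuming the pad position comes from the vision pipeline; the paper's log polynomial shaping of the closing speed is omitted, and the function name is illustrative:

```python
import math

def pure_pursuit_velocity(quad_xy, pad_xy, closing_speed):
    """Velocity command aligned with the line of sight from the
    quadcopter to the landing pad, scaled to the closing speed.
    (2-D sketch; vertical descent and speed shaping omitted.)"""
    dx = pad_xy[0] - quad_xy[0]
    dy = pad_xy[1] - quad_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return (0.0, 0.0)  # already over the pad
    return (closing_speed * dx / dist, closing_speed * dy / dist)

# Pad 3 m east and 4 m north of the quadcopter, 1 m/s closing speed:
vx, vy = pure_pursuit_velocity((0.0, 0.0), (3.0, 4.0), 1.0)
print(f"{vx:.1f} {vy:.1f}")  # 0.6 0.8
```

Because the command always points at the pad's current position, the pursuit geometry tolerates moderate target motion, which is consistent with the moving-target speeds reported in the experiments.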